Single image super-resolution algorithm based on unified iterative least squares regulation
ZHAO Xiaole, WU Yadong, TIAN Jinsha, ZHANG Hongying
Journal of Computer Applications    2016, 36 (3): 800-805.   DOI: 10.11772/j.issn.1001-9081.2016.03.800
Machine-learning-based image Super-Resolution (SR) has proved to be a promising single-image SR technique, within which sparse representation and dictionary learning have become hotspots. To address time-consuming dictionary training and low-accuracy SR recovery, an SR algorithm was proposed that reduces the inconsistency between the Low-Resolution (LR) and High-Resolution (HR) feature spaces as far as possible. The Iterative Least Squares Dictionary Learning Algorithm (ILS-DLA) was adopted to train the LR/HR dictionaries, and Anchored Neighborhood Regression (ANR) was used to recover HR images. Because of its integral optimization procedure, ILS-DLA trains LR/HR dictionaries in a relatively short time; and since it adopts the same optimization strategy as ANR, it theoretically reduces the divergence between the LR and HR dictionaries. Extensive experiments show that the proposed method learns better dictionaries than K-means Singular Value Decomposition (K-SVD) and Beta Process Joint Dictionary Learning (BPJDL), and restores images better than other state-of-the-art SR algorithms.
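As a rough illustration of the ANR recovery step described above, the sketch below precomputes one ridge-regression projection per dictionary atom and maps each LR feature to an HR patch through its most correlated anchor. It assumes the LR/HR dictionaries have already been trained (e.g., by ILS-DLA); the neighborhood size K, the regularization weight lam, and all function names are illustrative, not the authors' implementation.

```python
import numpy as np

def anr_projections(D_l, D_h, K=40, lam=0.1):
    """Precompute one ridge-regression projection per LR atom (anchor).
    D_l: (d_l, n) LR dictionary, D_h: (d_h, n) HR dictionary, columns = atoms."""
    K = min(K, D_l.shape[1])
    corr = D_l.T @ D_l                          # atom-to-atom correlations
    projections = []
    for j in range(D_l.shape[1]):
        nbrs = np.argsort(-corr[j])[:K]         # K most correlated atoms of anchor j
        N_l, N_h = D_l[:, nbrs], D_h[:, nbrs]
        # Closed-form ridge regression: maps an LR feature straight to an HR patch
        P = N_h @ np.linalg.solve(N_l.T @ N_l + lam * np.eye(K), N_l.T)
        projections.append(P)
    return projections

def recover_patch(y, D_l, projections):
    """Map one LR feature vector y to an HR patch via its nearest anchor."""
    j = int(np.argmax(D_l.T @ y))               # most correlated anchor
    return projections[j] @ y
```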
No-reference image quality assessment based on scale invariance
TIAN Jinsha, HAN Yongguo, WU Yadong, ZHAO Xiaole, ZHANG Hongying
Journal of Computer Applications    2016, 36 (3): 789-794.   DOI: 10.11772/j.issn.1001-9081.2016.03.789
Most existing general-purpose no-reference image quality assessment methods use machine learning to train a regression model on images with associated human subjective scores, and then predict the perceptual quality of a test image. Such opinion-aware methods require long training times and depend on the distortion types in the training database, so their generalization ability is weak, which limits their practical usability. To remove this database dependence, a no-reference image quality assessment method based on normalized scale invariance was proposed. Natural Scene Statistics (NSS) features and edge characteristics were combined as the features for quality assessment, with no information required beyond the test image itself; the two feature vectors were then used to compute a global difference across scales as the image quality score. The experimental results show that the proposed method evaluates multiply-distorted images well at low computational complexity. Compared with state-of-the-art no-reference models, it achieves better overall performance and is suitable for practical applications.
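The abstract leaves the exact features and distance unspecified; the sketch below is one plausible reading, pairing MSCN-style NSS statistics with a Sobel edge statistic and scoring quality as the normalized feature difference between two scales. All constants and feature choices are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def mscn(img, sigma=7/6):
    """Mean-subtracted contrast-normalized (MSCN) coefficients, a common NSS feature."""
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    return (img - mu) / (np.sqrt(np.maximum(var, 0)) + 1.0)

def features(img):
    """NSS statistics plus edge-strength statistics for one scale."""
    m = mscn(img)
    grad = np.hypot(sobel(img, 0), sobel(img, 1))   # edge characteristic
    return np.array([m.mean(), m.var(), np.abs(m).mean(), grad.mean(), grad.var()])

def quality_score(img):
    """Global feature difference across two scales; larger = more distorted."""
    f1 = features(img)
    f2 = features(img[::2, ::2])                    # half-resolution scale
    return np.linalg.norm(f1 - f2) / (np.linalg.norm(f1) + 1e-12)
```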
Low-illumination image enhancement based on physical model
WANG Xiaoyuan, ZHANG Hongying, WU Yadong, LIU Yan
Journal of Computer Applications    2015, 35 (8): 2301-2304.   DOI: 10.11772/j.issn.1001-9081.2015.08.2301
Since an inverted low-illumination image resembles a hazy image (a pseudo fog map) whose haze density is determined by illumination rather than by depth of field, a low-illumination image enhancement method based on a physical model was proposed, providing a fast and accurate way to estimate the transmittance. Firstly, the dark channel prior was used to estimate the atmospheric light of the pseudo fog map, and the transmittance was estimated according to the illumination. Secondly, a haze-free image was restored with the atmospheric scattering model. Finally, the enhanced image was obtained by inverting the haze-free image, and a clear image was produced by applying detail compensation to it. Extensive experiments show that, compared with existing algorithms including dark-channel-prior-based enhancement, defogging techniques, and multi-scale Retinex with color restoration, the proposed algorithm is faster and performs well without losing information; it can also improve the efficiency of image analysis and recognition systems.
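A minimal sketch of the invert-dehaze-invert pipeline the abstract describes, assuming float RGB input in [0, 1]. The dark-channel parameters (omega, t0, patch size) and the atmospheric-light heuristic are illustrative; the paper's illumination-based transmittance estimate and detail compensation step are not reproduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB and a local patch (dark channel prior)."""
    return minimum_filter(img.min(axis=2), size=patch)

def enhance_low_light(img, omega=0.8, t0=0.1):
    """img: float RGB in [0, 1].  Invert -> dehaze -> invert back."""
    inv = 1.0 - img                                    # pseudo fog map
    dark = dark_channel(inv)
    # Atmospheric light: brightest inverted pixels among the haziest 0.1%
    flat = inv.reshape(-1, 3)
    A = flat[np.argsort(dark.ravel())[-max(1, flat.shape[0] // 1000):]].max(axis=0)
    t = np.maximum(1.0 - omega * dark / A.max(), t0)   # illumination-driven transmittance
    dehazed = (inv - A) / t[..., None] + A             # atmospheric scattering model
    return np.clip(1.0 - dehazed, 0.0, 1.0)            # invert back = enhanced image
```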

Polynomial interpolation algorithm framework based on osculating polynomial approximation
ZHAO Xiaole, WU Yadong, ZHANG Hongying, ZHAO Jing
Journal of Computer Applications    2015, 35 (8): 2266-2273.   DOI: 10.11772/j.issn.1001-9081.2015.08.2266
Polynomial interpolation is a common approximation method in approximation theory and is widely used in numerical analysis, signal processing, and other fields. Traditional polynomial interpolation algorithms have mainly been developed by combining numerical analysis with experimental results, lacking a unified theoretical description and a systematic way to construct new schemes. A unified theoretical framework for polynomial interpolation algorithms based on osculating polynomial approximation was therefore proposed. The framework has three ingredients: the number of sample points, the osculating order at each sample point, and the derivative approximation rule; existing interpolation algorithms can be analyzed and new algorithms developed within it. The representation of existing mainstream interpolation algorithms in the proposed framework was analyzed, and the general process for deriving new algorithms was demonstrated with a four-point, second-order osculating polynomial interpolation. Theoretical analysis and numerical experiments show that almost all mainstream polynomial interpolation algorithms are osculating polynomial interpolations, and that their behavior is strongly determined by the number of sample points, the osculating order, and the derivative approximation rule.
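The framework's three ingredients can be made concrete in code. The sketch below fixes them as: four sample points, first-order osculation at the two interior points, and a central-difference derivative rule, which reproduces the classical Catmull-Rom cubic. Raising the osculating order to two, as in the paper's example, would additionally match second derivatives; this sketch only illustrates the framework's structure.

```python
def osculating_interp(p0, p1, p2, p3, t):
    """Framework instance: 4 sample points; osculating order 1 at p1 and p2
    (match value and first derivative); central-difference derivative rule.
    These choices reproduce the Catmull-Rom cubic on the interval [p1, p2]."""
    d1 = (p2 - p0) / 2.0                  # derivative estimate at p1
    d2 = (p3 - p1) / 2.0                  # derivative estimate at p2
    h00 = 2*t**3 - 3*t**2 + 1             # cubic Hermite basis on t in [0, 1]
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p1 + h10*d1 + h01*p2 + h11*d2
```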

Ranking-k: effective subspace dominating query algorithm
LI Qiusheng, WU Yadong, LIN Maosong, WANG Song, WANG Haiyang, FENG Xinmiao
Journal of Computer Applications    2015, 35 (1): 108-114.   DOI: 10.11772/j.issn.1001-9081.2015.01.0108
The Top-k dominating query algorithm requires substantial time and space to build combined indexes over the attributes, and its query accuracy is low for data containing duplicate attribute values. To solve these problems, a Ranking-k algorithm was proposed: a new subspace dominating query algorithm that combines B+-trees with a probability distribution model. Firstly, an ordered list was built for each data attribute with a B+-tree. Secondly, the ordered attribute lists satisfying the skyline criterion were scanned round-robin, generating candidate tuples and yielding k end tuples. Thirdly, the dominated scores of the end tuples were computed with the probability distribution model from the generated candidate and end tuples. Iterating this process produces the final query result. The experimental results show that the overall query efficiency of Ranking-k is improved by 94.43% over the Basic-Scan Algorithm (BSA) and by 7.63% over the Differential Algorithm (DA), and that its query results are closer to the theoretical values than those of the Top-k Dominating with Early Pruning (TDEP) algorithm, BSA, and DA.
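As a simplified, hedged sketch of the scan stage, the code below replaces the B+-tree ordered lists with pre-sorted attribute columns and computes dominated scores exactly rather than with the paper's probability distribution model; the stopping rule and all names are illustrative.

```python
import numpy as np

def round_robin_candidates(data, k):
    """data: (n, m) array, larger is better.  Scan m sorted attribute lists
    round-robin (a stand-in for the paper's B+-tree ordered lists) and stop
    once k objects have been seen in every list."""
    n, m = data.shape
    order = [np.argsort(-data[:, j]) for j in range(m)]   # one ordered list per attribute
    seen_count = np.zeros(n, dtype=int)
    candidates, depth = set(), 0
    while np.sum(seen_count == m) < k and depth < n:
        for j in range(m):                                # round-robin over attributes
            i = order[j][depth]
            seen_count[i] += 1
            candidates.add(i)
        depth += 1
    return candidates

def dominating_scores(data, candidates):
    """Exact dominated score per candidate (the paper instead estimates this
    with a probability distribution model to avoid a full scan)."""
    scores = {}
    for i in candidates:
        dominates = np.all(data[i] >= data, axis=1) & np.any(data[i] > data, axis=1)
        scores[i] = int(dominates.sum())
    return scores
```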

Real-time image/video haze removal algorithm with color restoration
DIAO Yangjie, ZHANG Hongying, WU Yadong, CHEN Meng
Journal of Computer Applications    2014, 34 (9): 2702-2707.   DOI: 10.11772/j.issn.1001-9081.2014.09.2702
To overcome the defects of existing algorithms, such as poor real-time performance, artifacts in sky regions, and overly dark dehazed images, a real-time image haze removal algorithm was proposed. Firstly, the dark channel prior was used to estimate a rough transmission map. Secondly, optimized guided filtering was applied to refine the down-sampled rough transmission map, which enables real-time processing of higher-resolution images. Thirdly, the refined transmission map was up-sampled and corrected to obtain the final transmission map, which overcomes the poor results in sky regions. Finally, a clear image was obtained by adaptive brightness adjustment with color restoration. The complexity of the algorithm is linear in the number of input image pixels, which yields a very fast implementation: for an image with a resolution of 600×400, the processing time is 80 ms.
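A sketch of the refinement step that makes the pipeline fast: the rough transmission map is refined by a guided filter at reduced resolution and then up-sampled. The box-filter guided filter below is a standard formulation; the radius r, eps, the down-sampling factor, and the lower bound on transmission are assumed values, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def guided_filter(I, p, r=20, eps=1e-3):
    """Box-filter guided filter: edge-preserving smoothing of p, guided by I."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mp = mean(I), mean(p)
    a = (mean(I * p) - mI * mp) / (mean(I * I) - mI * mI + eps)
    b = mp - a * mI
    return mean(a) * I + mean(b)

def refine_transmission(gray, t_rough, scale=4):
    """Refine the rough transmission at 1/scale resolution, then up-sample --
    the down-sampling is what makes the refinement near real-time."""
    g, t = gray[::scale, ::scale], t_rough[::scale, ::scale]
    t_small = guided_filter(g, t)
    t_full = zoom(t_small, scale, order=1)[:gray.shape[0], :gray.shape[1]]
    return np.clip(t_full, 0.1, 1.0)      # lower bound also tames the sky region
```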

Fast haze removal algorithm for single image based on human visual characteristics
ZHANG Hongying, ZHANG Sainan, WU Yadong, WU Bin
Journal of Computer Applications    2014, 34 (6): 1753-1757.   DOI: 10.11772/j.issn.1001-9081.2014.06.1753
To remove weather-induced degradation from images, a fast single-image haze removal algorithm based on human visual characteristics was proposed. According to the luminance distribution of the hazy image and human visual characteristics, the method first estimated a coarse transmission map from the luminance component, then refined it with a linear spatial filter and obtained the dehazed image via the atmospheric scattering model. Finally, a new image-enhancement fitting function was applied to the luminance component of the dehazed image to make it more natural and clear. The experimental results show that the proposed algorithm removes haze effectively and outperforms existing algorithms in contrast, information entropy, and computing time.
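A minimal sketch of one plausible reading of this pipeline, assuming float RGB input in [0, 1]: the luminance component drives a coarse transmission estimate, a mean filter stands in for the paper's linear spatial filter, and the atmospheric scattering model restores the scene. The constants and the final enhancement fitting function (omitted here) are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dehaze_luminance(img, omega=0.8, t0=0.1, r=15):
    """img: float RGB in [0, 1].  Luminance-based transmission estimate."""
    Y = 0.299*img[..., 0] + 0.587*img[..., 1] + 0.114*img[..., 2]  # luminance
    A = np.percentile(Y, 99.9)                  # bright-region atmospheric light
    t = 1.0 - omega * (Y / A)                   # coarse transmission map
    t = uniform_filter(t, size=r)               # linear spatial-filter refinement
    t = np.maximum(t, t0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)  # scattering model
```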

Reverse curvature-driven super-resolution algorithm based on Taylor formula
ZHAO Xiaole, WU Yadong, ZHANG Hongying, ZHAO Jing
Journal of Computer Applications    2014, 34 (12): 3570-3575.  
To address the loss of contrast and sharpness typical of traditional interpolation and model-based methods, a reverse curvature-driven Super-Resolution (SR) algorithm based on the Taylor formula was proposed. The algorithm used the Taylor formula to estimate the local trend of image intensity, and the curvature of the isophote to refine edge features; gradients were used as constraints to suppress jagged edges and ringing artifacts. The experimental results show that the proposed algorithm has clear advantages over conventional interpolation and model-based methods in sharpness and information retention, and its results agree better with human visual perception. Because the reverse diffusion is implemented directly from the Taylor expansion, the algorithm is also more efficient than traditional iterative methods.
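A hedged sketch of the curvature-driven part: the isophote curvature below follows the standard level-line formula, and one reverse-diffusion step sharpens the image while a gradient-based damping term plays the role of the abstract's gradient constraint. The Taylor-based trend estimation is not reproduced; the step size is an assumed coefficient.

```python
import numpy as np

def isophote_curvature(I):
    """Curvature of the isophote (level line) through each pixel."""
    Iy, Ix = np.gradient(I)
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    num = Iyy * Ix**2 - (Ixy + Iyx) * Ix * Iy + Ixx * Iy**2
    return num / np.maximum((Ix**2 + Iy**2) ** 1.5, 1e-6)

def sharpen_step(I, strength=0.1):
    """One reverse (edge-sharpening) diffusion step driven by isophote
    curvature; the gradient-based damping suppresses jaggies and ringing
    near strong edges.  'strength' is an assumed coefficient."""
    Iy, Ix = np.gradient(I)
    damp = 1.0 / (1.0 + Ix**2 + Iy**2)          # gradient constraint
    return I - strength * damp * isophote_curvature(I)
```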

Video key frame extraction method based on image dominant color
WANG Song, HAN Yongguo, WU Yadong, ZHANG Sainan
Journal of Computer Applications    2013, 33 (09): 2631-2635.   DOI: 10.11772/j.issn.1001-9081.2013.09.2631
A video key frame reflects the main content of a video sequence, and key frame extraction is one of the key steps in video content retrieval. Although effective key frame extraction algorithms exist, they still suffer from heavy computation, the difficulty of choosing a suitable threshold for different types of sequences, and applicability to only limited types of video. A video key frame extraction method based on the frame's dominant color was proposed. Firstly, every frame was simplified to its dominant colors, obtained with an octree-based color quantization algorithm. Secondly, shot boundaries were detected from the color similarity between adjacent frames. Finally, key frames were selected from the candidate frames with the K-means clustering algorithm. The experimental results show that the proposed method is computationally simpler and has lower time and space complexity than other key frame extraction methods.
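A rough sketch of the pipeline under stated simplifications: uniform color quantization stands in for the paper's octree quantizer, histogram intersection measures adjacent-frame similarity, and K-means picks key frames from shot-boundary candidates. The threshold, bit depth, and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_color_hist(frame, bits=2):
    """frame: uint8 HxWx3.  Quantize each RGB channel to 2**bits levels
    (a uniform stand-in for the octree quantizer) and return the
    normalized color histogram."""
    q = (frame >> (8 - bits)).astype(np.int64).reshape(-1, 3)
    idx = (q[:, 0] << (2 * bits)) | (q[:, 1] << bits) | q[:, 2]
    hist = np.bincount(idx, minlength=2 ** (3 * bits)).astype(float)
    return hist / hist.sum()

def key_frames(frames, boundary_thresh=0.4, k=5):
    """Shot boundaries where adjacent-frame color similarity drops, then
    K-means over candidate histograms; frames nearest the centroids are keys."""
    hists = np.array([dominant_color_hist(f) for f in frames])
    sims = [np.minimum(hists[i], hists[i + 1]).sum() for i in range(len(frames) - 1)]
    candidates = [0] + [i + 1 for i, s in enumerate(sims) if s < boundary_thresh]
    X = hists[candidates]
    km = KMeans(n_clusters=min(k, len(candidates)), n_init=10).fit(X)
    keys = [candidates[int(np.argmin(((X - c) ** 2).sum(axis=1)))]
            for c in km.cluster_centers_]
    return sorted(set(keys))
```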